    'Arranged' Marriage, Co-Residence and Female Schooling: A Model with Evidence from India

    We model the consequences of parental control over sons' choice of wives for parental incentives to educate daughters, when the marriage market exhibits competitive dowry payments and altruistic but paternalistic parents benefit from having married sons live with them. By choosing uneducated brides, some parents can prevent costly household partition. Paternalistic self-interest consequently generates low levels of female schooling in the steady-state equilibrium. State payments to parents for educating daughters fail to raise female schooling levels. Policies that promote nuclear families (such as housing subsidies), interventions against early marriages, and state support for couples who marry against parental wishes are, however, all likely to improve female schooling. We offer evidence from India consistent with our theoretical analysis.
    Keywords: arranged marriage, dowry, bride price, female literacy, marriage markets, stable marriage allocation
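
    As a purely illustrative sketch of the mechanism (our own notation, not the authors' model): suppose a son's parents value co-residence at v > 0, a bride with schooling e brings a competitive dowry d(e), and an educated bride (e = 1) insists on household partition. The son's parents then prefer an uneducated bride whenever

% Stylized co-residence trade-off; illustrative notation only, not the authors' model.
\[
  \underbrace{d(0) + v}_{\text{uneducated bride, co-residence}}
  \;\ge\;
  \underbrace{d(1)}_{\text{educated bride, partition}}
  \quad\Longleftrightarrow\quad
  v \;\ge\; d(1) - d(0),
\]

    i.e., whenever the co-residence benefit exceeds the dowry premium to schooling, demand for educated brides collapses, and with it the marriage-market return to educating daughters.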

    The Rate of Convergence of AdaBoost

    The AdaBoost algorithm was designed to combine many "weak" hypotheses that perform slightly better than random guessing into a "strong" hypothesis that has very low error. We study the rate at which AdaBoost iteratively converges to the minimum of the "exponential loss." Unlike previous work, our proofs do not require a weak-learning assumption, nor do they require that minimizers of the exponential loss are finite. Our first result shows that the exponential loss of AdaBoost's computed parameter vector will be at most Δ more than that of any parameter vector of ℓ₁-norm bounded by B within a number of rounds that is at most a polynomial in B and 1/Δ. We also provide lower bounds showing that a polynomial dependence on these parameters is necessary. Our second result is that within C/Δ iterations, AdaBoost achieves a value of the exponential loss that is at most Δ more than the best possible value, where C depends on the dataset. We show that this dependence of the rate on Δ is optimal up to constant factors, i.e., at least Ω(1/Δ) rounds are necessary to achieve within Δ of the optimal exponential loss.
    Comment: A preliminary version will appear in COLT 201
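
    As an illustration of the quantities discussed in this abstract, the sketch below implements AdaBoost with decision stumps on a toy dataset and tracks the exponential loss round by round. The dataset, the stump learner, and all function names are our own illustrative choices, not from the paper; the printed loss is non-increasing, and the paper's results bound how quickly it approaches the optimum.

import numpy as np

def stump_predictions(X, threshold, sign):
    """Decision stump: predict `sign` where x > threshold, else -sign."""
    return np.where(X > threshold, sign, -sign)

def best_stump(X, y, w):
    """Exhaustively pick the stump with the lowest weighted error."""
    best = (None, None, np.inf)  # (threshold, sign, weighted error)
    for threshold in np.unique(X):
        for sign in (1, -1):
            err = np.sum(w * (stump_predictions(X, threshold, sign) != y))
            if err < best[2]:
                best = (threshold, sign, err)
    return best

def adaboost(X, y, rounds=20):
    n = len(X)
    w = np.full(n, 1.0 / n)   # distribution D_t over training examples
    margin = np.zeros(n)      # y_i * F_t(x_i) for the combined classifier F_t
    losses = []
    for _ in range(rounds):
        threshold, sign, err = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # AdaBoost's step size
        margin += alpha * y * stump_predictions(X, threshold, sign)
        # Exponential loss (1/n) * sum_i exp(-y_i F_t(x_i)); AdaBoost
        # performs coordinate descent on this objective.
        losses.append(np.mean(np.exp(-margin)))
        w = np.exp(-margin)
        w /= w.sum()          # reweight toward the hard examples
    return losses

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=200)
    y = np.where(np.abs(X) > 0.4, 1, -1)  # no single stump fits these labels
    for t, loss in enumerate(adaboost(X, y), start=1):
        print(f"round {t:2d}: exponential loss = {loss:.4f}")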

    North, South, East, West: What's best? Modern RTAs and Their Implications for the Stability of Trade Policy

    'Arranged' Marriage, Dowry and Female Literacy in a Transitional Society

    Game theory and optimization in boosting

    Boosting is a central technique of machine learning, the branch of artificial intelligence concerned with designing computer programs that can build increasingly better models of reality as they are presented with more data. The theory of boosting is based on the observation that combining several models with low predictive power can often lead to a significant boost in the accuracy of the combined meta-model. This approach, introduced about twenty years ago, has been a prolific area of research and has proved immensely successful in practice. However, despite extensive work, many basic questions about boosting remain unanswered. In this thesis, we increase our understanding of three such theoretical aspects of boosting.
    In Chapter 2 we study the convergence properties of the best-known boosting algorithm, AdaBoost. Rate bounds for this important algorithm are known only for special situations that rarely hold in practice. Our work guarantees that fast rates hold in all situations, and the bounds we provide are optimal. Apart from being important for practitioners, this bound also has implications for the statistical properties of AdaBoost.
    Like AdaBoost, most boosting algorithms are used for classification tasks, where the objective is to create a model that can categorize relevant input data into one of a finite number of different classes. The most commonly studied setting is binary classification, where there are only two possible classes, although the tasks arising in practice are almost always multiclass in nature. In Chapter 3 we provide a broad and general framework for studying boosting for multiclass classification. Using this approach, we are able to identify for the first time the minimum assumptions under which boosting the accuracy is possible in the multiclass setting. Such theory existed previously for boosting for binary classification, but straightforward extensions of it to the multiclass setting lead to assumptions that are either too strong or too weak for boosting to be effectively possible. We also design boosting algorithms using these minimal assumptions, which work in more general situations than previous algorithms that assumed too much.
    In the final chapter, we study the problem of learning from expert advice, which is closely related to boosting. The goal is to extract useful advice from the opinions of a group of experts even when there is no consensus among the experts themselves. Although algorithms for this task enjoying excellent guarantees have existed in the past, these were only approximately optimal, and exactly optimal strategies were known only when the experts gave binary "yes/no" opinions. Our work derives exactly optimal strategies when the experts provide probabilistic opinions, which can be more nuanced than deterministic ones. In terms of boosting, this provides the optimal way of combining individual models that attach confidence ratings to their predictions indicating predictive quality.
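
    The exactly optimal strategies derived in the final chapter are not reproduced here, but for reference the sketch below implements the classic Hedge (multiplicative-weights) strategy, the kind of approximately optimal algorithm the abstract alludes to, with experts who give probabilistic opinions in [0, 1]. The toy data and the learning rate eta are our own illustrative choices.

import numpy as np

def hedge(opinions, outcomes, eta=0.5):
    """opinions: (rounds, n_experts) array of probabilities in [0, 1].
    outcomes: (rounds,) array of realized binary outcomes in {0, 1}.
    Returns the learner's and each expert's cumulative absolute loss."""
    rounds, n = opinions.shape
    log_w = np.zeros(n)  # weights kept in log space for numerical stability
    learner_loss, expert_loss = 0.0, np.zeros(n)
    for t in range(rounds):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                     # current distribution over experts
        prediction = p @ opinions[t]     # weighted-average opinion
        learner_loss += abs(prediction - outcomes[t])
        losses = np.abs(opinions[t] - outcomes[t])
        expert_loss += losses
        log_w -= eta * losses            # downweight experts that erred
    return learner_loss, expert_loss

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rounds, n = 500, 5
    opinions = rng.uniform(size=(rounds, n))
    opinions[:, 0] = 0.9 * (rng.uniform(size=rounds) < 0.7)  # one informed expert
    outcomes = (opinions[:, 0] > 0.5).astype(float)
    learner, experts = hedge(opinions, outcomes)
    print(f"learner loss: {learner:.1f}; best expert loss: {experts.min():.1f}")

    Hedge's regret against the best expert is O(sqrt(T log n)) with a tuned eta, which is only approximately optimal; per the abstract, the thesis instead derives exactly optimal strategies for this probabilistic-opinion setting.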